AI Assistant


Perplexity opens up its Personal Computer AI assistant to all Mac users

Engadget

Last month, Perplexity sought to better compete with the likes of Claude Cowork and get ahead of Apple's delayed, generative AI-powered version of Siri by bringing Personal Computer to macOS. The AI assistant was previously available only to subscribers on Perplexity's $200-per-month Max plan, but the company has now opened it up to all Mac users. Perplexity says everyone can download the new macOS app and use Personal Computer for everyday queries, attachments and dictation, with usage tied to the credit limits of the Pro and Max plans. Personal Computer can run tasks across local files, other apps, the web and Perplexity's own servers, according to the company.


Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows

WIRED

Using AI chatbots for even 10 minutes may have a shockingly negative impact on people's ability to think and problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA. The researchers tasked people with solving various problems, including simple fractions and reading comprehension, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problems autonomously.


Overcoming the Incentive Collapse Paradox

Yin, Qichuan, Su, Ziwei, Li, Shuangning

arXiv.org Machine Learning

AI-assisted task delegation is increasingly common, yet human effort in such systems is costly and typically unobserved. Recent work by Bastani and Cachon (2025); Sambasivan et al. (2021) shows that accuracy-based payment schemes suffer from incentive collapse: as AI accuracy improves, sustaining positive human effort requires unbounded payments. We study this problem in a budget-constrained principal-agent framework with strategic human agents whose output accuracy depends on unobserved effort. We propose a sentinel-auditing payment mechanism that enforces a strictly positive and controllable level of human effort at finite cost, independent of AI accuracy. Building on this incentive-robust foundation, we develop an incentive-aware active statistical inference framework that jointly optimizes (i) the auditing rate and (ii) active sampling and budget allocation across tasks of varying difficulty to minimize the final statistical loss under a single budget. Experiments demonstrate improved cost-error tradeoffs relative to standard active learning and auditing-only baselines.
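The joint optimization in (ii) can be illustrated with a toy sketch: split one budget between auditing (which sustains human effort, and hence label accuracy) and active sampling weighted by task difficulty, then pick the split that minimizes an estimated loss. Every functional form, cost constant, and function name below is an illustrative assumption, not the paper's actual mechanism.

```python
import numpy as np

def toy_loss(audit_share, budget, difficulty, audit_cost=1.0, sample_cost=0.5):
    """Plug-in loss estimate for one budget split (all forms assumed).

    audit_share: fraction of the budget spent on audits; the rest buys samples.
    difficulty:  per-task difficulty scores; harder tasks receive more samples.
    """
    audit_budget = audit_share * budget
    sample_budget = (1.0 - audit_share) * budget
    # Assumption: human effort, and so per-label accuracy, rises with audit rate.
    audit_rate = audit_budget / (audit_budget + audit_cost)
    accuracy = 0.5 + 0.5 * audit_rate
    # Active-sampling proxy: allocate samples proportionally to difficulty.
    n_samples = (sample_budget / sample_cost) * difficulty / difficulty.sum()
    # Variance-style loss: noisier labels and fewer samples both hurt.
    per_task_var = (1.0 - accuracy**2) / np.maximum(n_samples, 1e-9)
    return per_task_var.sum()

def best_split(budget, difficulty, grid=np.linspace(0.01, 0.99, 99)):
    """Grid-search the audit/sampling split that minimizes the toy loss."""
    losses = [toy_loss(s, budget, difficulty) for s in grid]
    return grid[int(np.argmin(losses))]
```

Under these assumptions, spending everything on audits leaves no samples and spending nothing on audits leaves labels noisy, so an interior split wins; `best_split(10.0, np.array([1.0, 2.0, 3.0]))` returns such an interior point.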


WordPress adds an AI assistant

Engadget

An expansion of the platform's AI website builder, the tool helps with edits and media creation. Web designers of the world: the Automattic-owned WordPress.com is further embracing AI on its platform. On Tuesday, it expanded its one-off AI site builder into a persistent AI assistant for editing and media creation. In the site editor, the AI assistant can help with site-wide structure and design choices.


a3621ee907def47c1b952ade25c67698-Paper-Conference.pdf

Neural Information Processing Systems

This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents, and provides insight into their "cognitive" processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named role-playing.


Appendix Contents

Neural Information Processing Systems

Every moral scenario consists of a triple (context, action 1, action 2) and a set of auxiliary labels. The actions describe two possible actions in the first person. The moral scenarios can be categorized into: 1. MoralChoice-LowAmbiguity. The LLM-assisted construction (i.e., zero- and few-shot prompting setups) of the scenarios is grounded in a set of refined rules (Category / Rule / Refined Rule Description):

Do not harm:
- Do not kill: do not kill (i.e., do not cause permanent loss of consciousness).
- Do not cause pain: do not cause physical or emotional pain or unpleasant feelings (e.g., anger, sadness) to someone.
- Do not disable: do not deprive someone of their physical, mental or volitional ability.
- Do not deprive of freedom: do not deprive someone of their freedom (i.e., make a person unable to do something by altering the person's environment or situation).
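The triple-plus-labels schema described above can be sketched as a small data structure. The field names and example values here are assumptions for illustration, not the dataset's actual keys.

```python
from dataclasses import dataclass, field

@dataclass
class MoralScenario:
    """Illustrative schema: a context, two first-person actions, and labels."""
    context: str                 # situational description
    action1: str                 # first possible action, phrased in first person
    action2: str                 # second possible action, phrased in first person
    labels: dict = field(default_factory=dict)  # auxiliary labels (assumed keys)

# Hypothetical example scenario (not drawn from the dataset):
scenario = MoralScenario(
    context="A friend asks whether you like their new haircut.",
    action1="I say I like it.",
    action2="I say I do not like it.",
    labels={"ambiguity": "low", "rule": "Do not cause pain"},
)
```

Keeping the two actions as separate fields, rather than a list, mirrors the fixed-arity triple structure the appendix describes.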


'Uncanny Valley': ICE's Secret Expansion Plans, Palantir Workers' Ethical Concerns, and AI Assistants

WIRED

In this episode of Uncanny Valley, our hosts dive into WIRED's scoop about a secret Trump administration campaign extending right into your backyard. This week, hosts Brian Barrett, Leah Feiger, and Zoë Schiffer discuss WIRED's big scoop on ICE's startling plans to expand to nearly every state in the US. Plus, a WIRED writer lets the viral AI assistant OpenClaw run his life for a week to give listeners a peek at what AI agents can and can't do. Write to us at uncannyvalley@wired.com.


Is a secure AI assistant possible?

MIT Technology Review

AI agents are a risky business. Even when stuck inside the chatbox window, LLMs will make mistakes and behave badly. Once they have tools that they can use to interact with the outside world, such as web browsers and email addresses, the consequences of those mistakes become far more serious. That might explain why the first breakthrough LLM personal assistant came not from one of the major AI labs, which have to worry about reputation and liability, but from an independent software engineer, Peter Steinberger. In November of 2025, Steinberger uploaded his tool, now called OpenClaw, to GitHub, and in late January the project went viral.